Updated: Jun 21, 2024   |   Alan Brown

Top security takeaways from Apple WWDC 2024


Earlier this month, Apple's annual developer conference, WWDC, delivered a flood of announcements. Most notable among them was Apple's much-anticipated AI pitch. In keeping with Apple's heavily advertised positions, user privacy was given top billing. Apple's integration with OpenAI's ChatGPT was also widely discussed and reported. With these features being woven into the core of macOS, iPhone, and other Apple products, what are the security implications for your most sensitive data?

AI Integration and User Privacy

Apple showed off a number of features, such as handwriting recognition and mail priority sorting, that may benefit from LLM-based machine learning. To understand the security takeaways from WWDC 2024, it's worth looking at how these advancements affect user data protection. To power them, an ML model Apple calls Foundation runs locally, on-device. Compared with other solutions such as Microsoft's Copilot, this means your data remains on your device and isn't used as training material for a remote model, or otherwise misused by the vendor.
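Apple has not published a public API for its Foundation model, but the on-device pattern it describes is already familiar from Core ML. The sketch below is illustrative only: the model file and its inputs are hypothetical stand-ins, not Apple's actual model.

    import CoreML

    // Hypothetical: "Classifier.mlmodelc" stands in for any compiled Core ML
    // model bundled with the app; Apple's foundation model is not exposed.
    guard let modelURL = Bundle.main.url(forResource: "Classifier",
                                         withExtension: "mlmodelc") else {
        fatalError("model not bundled")
    }

    let config = MLModelConfiguration()
    // Pin execution to the CPU and Neural Engine: inference stays on-device.
    config.computeUnits = .cpuAndNeuralEngine

    do {
        let model = try MLModel(contentsOf: modelURL, configuration: config)
        // Feature names depend on the model; "text" is purely illustrative.
        let input = try MLDictionaryFeatureProvider(
            dictionary: ["text": "Sort my inbox"])
        let output = try model.prediction(from: input)
        print(output.featureNames)
    } catch {
        print("On-device inference failed: \(error)")
    }

The property that matters is the compute-units setting: the request is served entirely by local silicon, so there is no network call for a vendor to log or train on.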

Private Cloud Compute: How Apple is Scaling Up AI

Unfortunately, powerful as iPhones and Macs are, some features require compute or memory that simply isn't available on a user device. Apple's solution is to scale up their Foundation model using something called Private Cloud Compute (PCC): the user's data is shipped to servers in Apple's data centers for processing.

Apple makes some pretty bold claims about how user data is handled in this remote case. They claim that information sent to the service will not be available to Apple under any circumstances. This is accomplished by creating a secure channel between the user device and the specific server processing the data, rather than an edge element in their cloud. Apple claims to use the data only to process the request, and never for training. They have also built their own server hardware (based on their M-series chips) running a stripped-down version of macOS/iOS with security features designed to prevent user data from being accidentally retained, or accessed by an insider threat actor.

Finally, Apple has committed to publishing all released binaries of their server software, along with a test environment that allows independent security researchers to verify how the system behaves. This is similar to the Security Research Device program they run for iPhone (https://security.apple.com/research-device/). Other lower-level components, such as the bootloader and the OS running on the Secure Enclave that performs cryptographic key management, are to be completely open sourced for independent verification. Before any user data is sent to PCC, the user device must cryptographically verify the software running on the remote server. This attempts to make guarantees similar to signing local binary files, so that integrity and provenance can be verified.
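Apple has not published client-side code for this handshake, so the sketch below only illustrates the general attestation pattern described above; every type, key, and helper in it is hypothetical rather than Apple API.

    import Foundation
    import CryptoKit

    // Hypothetical shape of a PCC node's attestation: a measurement (hash)
    // of the software image it claims to run, signed by a hardware-rooted key.
    struct NodeAttestation {
        let softwareMeasurement: Data
        let signature: Data
    }

    // Stub: in reality this would be the set of measurements corresponding
    // to the released binaries Apple has committed to publishing.
    func loadPublishedMeasurements() -> Set<Data> { [] }

    func shouldSendUserData(to attestation: NodeAttestation,
                            nodeKey: P256.Signing.PublicKey) -> Bool {
        // 1. The node must be running a published, auditable software release.
        guard loadPublishedMeasurements()
                .contains(attestation.softwareMeasurement) else {
            return false
        }
        // 2. The measurement must be signed by the node's hardware-backed key.
        guard let sig = try? P256.Signing.ECDSASignature(
                  rawRepresentation: attestation.signature),
              nodeKey.isValidSignature(sig,
                  for: attestation.softwareMeasurement) else {
            return false
        }
        // Only now is an encrypted channel opened to this specific node.
        return true
    }

The design choice worth noting is that the trust decision happens on the client before any data leaves the device. That is what makes the published-binary commitment meaningful: a researcher who audits a release can tie it to the exact measurement devices will accept.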

Apple's Collaboration with OpenAI: ChatGPT Integration

Despite the impressive demos, Apple clearly recognizes the limits of their own technology and has inked a widely-leaked deal with OpenAI for access to ChatGPT. Siri falls back to ChatGPT in situations where it can't answer a query using either local device ML or Apple's cloud ML. The interaction is explicit, and requires user confirmation before continuing.
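Based on the demos, this behaves like a per-request consent gate: nothing goes to OpenAI until the user approves that specific request. A hypothetical sketch of the routing (none of these names are Apple API):

    // Hypothetical routing: Siri escalates outward only with per-request consent.
    enum AnswerSource { case onDevice, privateCloud, chatGPT }

    // Stubs for illustration; real capability checks are internal to Siri.
    func canAnswerOnDevice(_ query: String) -> Bool { false }
    func canAnswerInPrivateCloud(_ query: String) -> Bool { false }

    func route(_ query: String,
               userApproved: (String) -> Bool) -> AnswerSource? {
        if canAnswerOnDevice(query) { return .onDevice }
        if canAnswerInPrivateCloud(query) { return .privateCloud }
        // The third-party handoff is never silent: each request is confirmed.
        guard userApproved("Use ChatGPT to answer this?") else { return nil }
        return .chatGPT
    }

The per-request prompt is the important property: consent is not a one-time setting that silently covers every query afterwards.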

Comparing Apple's Privacy Measures with Other Vendors

Compared with other vendors, Apple has taken extraordinary steps to improve the privacy of user data when using machine learning technology. However, using Apple's cloud service still requires a degree of trust in the vendor's promises. One thing widely known about LLMs is that they require vast quantities of information as training data, and other vendors' models, like ChatGPT, use data submitted by users as part of this. It remains to be seen how Apple will handle this with their own models over time, given the highly competitive landscape. With the integration of ChatGPT (and promises of other services being integrated over time), accidental release of personal or organizational data to one of these services is still possible. Based on the demos so far, in a personal setting this is an explicit choice each time. However, it's unclear what administrative controls Apple will provide should an organization, such as an electric car manufacturer, microblogging service, or commercial space launch company, wish to avoid leaking sensitive data to OpenAI or others.
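Apple has not yet said what restrictions it will expose to administrators, but managed app configuration shows the general mechanism MDM could use. In this sketch the allowThirdPartyAI policy key is entirely hypothetical; only the com.apple.configuration.managed defaults key is an existing MDM convention.

    import Foundation

    // Managed app configuration pushed by MDM lands under this well-known key.
    let managed = UserDefaults.standard.dictionary(
        forKey: "com.apple.configuration.managed")

    // Hypothetical policy key: Apple has announced no such restriction yet.
    let allowThirdPartyAI = managed?["allowThirdPartyAI"] as? Bool ?? false

    if !allowThirdPartyAI {
        // Gate any feature that would hand data to an external model.
        print("Third-party AI handoff disabled by organization policy")
    }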

Ensuring Security with Third-Party Services

Apple has made some bold privacy claims, and has promised to put extraordinary effort into backing them up. That said, organizations should maintain a healthy skepticism about any information transfer to a third party. Any remote third-party service handling your sensitive data can carry compliance or legal obligations, depending on your industry or location. With low-friction access to third-party services such as ChatGPT, working with controlled documents on your device, or even just discussing something conversationally, may cause accidental data loss. Easy access to services like ChatGPT may turn your phone from a helpful assistant into that friend who can't keep a secret! Apple's many promises have yet to be tested in the real world; will they continue to deliver the level of service they've committed to today?

Preparing for AI-Enhanced Apple OS

Migrating to these AI-laced versions of Apple's operating systems should be done with care. Investigating organization-wide fleet management controls, delivered via MDM, to govern the availability of these capabilities will be an ongoing process. Security tools such as the Reveal Agent can provide visibility into where sensitive data is going and help prevent your organization's sensitive data from becoming the next answer to a ChatGPT question.

The Reveal Platform: Ensuring Secure Adoption of macOS 15

Reveal Agent is already undergoing extensive testing with early betas of macOS 15, with the aim of providing support on release day so your organization can move to the next version of macOS on its own timetable.
